Plug-in Estimation in High-Dimensional Linear Inverse Problems: A Rigorous Analysis

Fletcher, Alyson K., Pandit, Parthe, Rangan, Sundeep, Sarkar, Subrata, Schniter, Philip

Neural Information Processing Systems

Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax+w}$ often requires use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction. Several recent works have considered combining linear least-squares estimation with a generic or "plug-in" denoiser function that can be designed in a modular manner based on the prior knowledge about $\mathbf{x}$. While these methods have shown excellent performance, it has been difficult to obtain rigorous performance guarantees. This work considers plug-in denoising combined with the recently-developed Vector Approximate Message Passing (VAMP) algorithm, which is itself derived via Expectation Propagation techniques. It is shown that the mean squared error of this "plug-in" VAMP can be exactly predicted for a large class of high-dimensional random $\mathbf{A}$ and denoisers. The method is illustrated in image reconstruction and parametric bilinear estimation.
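The recursion being analyzed is easy to sketch. Below is a minimal NumPy illustration of plug-in VAMP on a sparse-recovery instance, with a soft-threshold denoiser standing in for a generic plug-in denoiser; the problem sizes, threshold level, and initialization are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, sparsity, gamma_w = 400, 300, 0.1, 1e4  # gamma_w = 1 / noise variance

# Sparse ground truth and noisy Gaussian measurements y = A x + w
x = rng.normal(size=N) * (rng.random(N) < sparsity)
A = rng.normal(size=(M, N)) / np.sqrt(M)
y = A @ x + rng.normal(size=M) / np.sqrt(gamma_w)

def denoise(r, gamma, lam=1.0):
    """Soft-threshold denoiser and its average derivative (divergence / N)."""
    t = lam / np.sqrt(gamma)
    xhat = np.sign(r) * np.maximum(np.abs(r) - t, 0.0)
    alpha = np.mean(np.abs(r) > t)
    return xhat, alpha

r1, gamma1 = A.T @ y, 1.0  # crude initialization (an assumption)
for _ in range(30):
    # Denoising stage, followed by the "extrinsic" message update
    x1, alpha1 = denoise(r1, gamma1)
    eta1 = gamma1 / max(alpha1, 1e-12)
    gamma2 = max(eta1 - gamma1, 1e-12)
    r2 = (eta1 * x1 - gamma1 * r1) / gamma2
    # LMMSE stage using the linear measurements
    C = np.linalg.inv(gamma_w * A.T @ A + gamma2 * np.eye(N))
    x2 = C @ (gamma_w * A.T @ y + gamma2 * r2)
    alpha2 = gamma2 * np.trace(C) / N
    eta2 = gamma2 / max(alpha2, 1e-12)
    gamma1 = max(eta2 - gamma2, 1e-12)
    r1 = (eta2 * x2 - gamma2 * r2) / gamma1

mse = np.mean((x2 - x) ** 2)  # the quantity the analysis predicts
```

The quantity characterized by the analysis is exactly this `mse`, which for large random $\mathbf{A}$ concentrates on a deterministic state-evolution prediction.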


Towards a Rigorous Analysis of Mutual Information in Contrastive Learning

Lee, Kyungeun, Kim, Jaeill, Kang, Suhyun, Rhee, Wonjong

arXiv.org Artificial Intelligence

Contrastive learning has emerged as a cornerstone in recent achievements of unsupervised representation learning. Its primary paradigm involves an instance discrimination task with a mutual information loss. The loss is known as InfoNCE and it has yielded vital insights into contrastive learning through the lens of mutual information analysis. However, the estimation of mutual information can prove challenging, creating a gap between the elegance of its mathematical foundation and the complexity of its estimation. As a result, drawing rigorous insights or conclusions from mutual information analysis becomes intricate. In this study, we introduce three novel methods and a few related theorems, aimed at enhancing the rigor of mutual information analysis. Despite their simplicity, these methods can carry substantial utility. Leveraging these approaches, we reassess three instances of contrastive learning analysis, illustrating their capacity to facilitate deeper comprehension or to rectify pre-existing misconceptions. Specifically, we investigate small batch size, mutual information as a measure, and the InfoMin principle.
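One of the issues revisited here, the small-batch problem, is visible directly in the loss: because InfoNCE is non-negative, the implied mutual-information estimate, log of the batch size minus the loss, can never exceed the log of the batch size. A minimal NumPy sketch; the temperature, dimensions, and noise level are illustrative assumptions:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i])."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                   # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal

rng = np.random.default_rng(0)
n, d = 256, 64
z = rng.normal(size=(n, d))
loss = info_nce(z + 0.05 * rng.normal(size=(n, d)), z)  # well-aligned views
mi_estimate = np.log(n) - loss  # InfoNCE estimate of mutual information
```

Since `loss >= 0`, `mi_estimate` cannot exceed `log(256) ≈ 5.55` here no matter how much information the two views actually share, which is why batch size must be accounted for when treating InfoNCE as a mutual-information measure.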


A Rigorous Analysis of Linsker-type Hebbian Learning

Feng, J., Pan, H., Roychowdhury, V. P.

Neural Information Processing Systems

We propose a novel rigorous approach for the analysis of Linsker's unsupervised Hebbian learning network. The behavior of this model is determined by the underlying nonlinear dynamics which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses. These parameters determine the presence or absence of a specific receptive field (also referred to as a 'connection pattern') as a saturated fixed point attractor of the model. In this paper, we perform a qualitative analysis of the underlying nonlinear dynamics over the parameter space, determine the effects of the system parameters on the emergence of various receptive fields, and predict precisely within which parameter regime the network will have the potential to develop a specially designated connection pattern. In particular, this approach exposes, for the first time, the crucial role played by the synaptic density functions, and provides a complete precise picture of the parameter space that defines the relationships among the different receptive fields. Our theoretical predictions are confirmed by numerical simulations.
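The role of the parameters can be illustrated with a toy one-dimensional version of the averaged dynamics, in which weights evolve under a Hebbian drive shaped by an arbor density and saturate at ±1 (the saturated fixed points discussed above). The covariance, arbor width, and the values of k1 and k2 below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # synapses in one receptive field

# Input covariance Q and a Gaussian arbor (synaptic density) over position
pos = np.linspace(-1.0, 1.0, n)
Q = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / 0.5)
arbor = np.exp(-(pos ** 2) / 0.5)

k1, k2, eta = -0.1, 0.0, 0.01  # Hebbian-rule parameters (assumed values)
w = 0.1 * rng.normal(size=n)

for _ in range(2000):
    # Linsker-type averaged Hebbian update, modulated by the arbor density;
    # hard limits at +/-1 create the saturated fixed-point attractors
    w += eta * arbor * (k1 + k2 * w.mean() + Q @ w)
    np.clip(w, -1.0, 1.0, out=w)

saturated = np.mean(np.abs(w) > 0.99)  # fraction of saturated synapses
```

Sweeping k1, k2, and the arbor width moves this system between regimes in which different saturated patterns (e.g., all-excitatory versus centre-surround) are the attractors, which is the parameter-space picture the paper makes precise.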


A Rigorous Analysis of Linsker-type Hebbian Learning

Feng, J., Pan, H., Roychowdhury, V. P.

Neural Information Processing Systems

Linsker's simulations have shown that for appropriate parameter regimes, several structured connection patterns (e.g., centre-surround and oriented afferent receptive fields (aRFs)) occur progressively as the Hebbian evolution of the weights is carried out layer by layer. The behavior of Linsker's model is determined by the underlying nonlinear dynamics, which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses.